
    A semidefinite program for unbalanced multisection in the stochastic block model

    We propose a semidefinite programming (SDP) algorithm for community detection in the stochastic block model, a popular model for networks with latent community structure. We prove that our algorithm achieves exact recovery of the latent communities, up to the information-theoretic limits determined by Abbe and Sandon (2015). Our result extends prior SDP approaches by allowing for many communities of different sizes. By virtue of the semidefinite approach, our algorithm also succeeds against a semirandom variant of the stochastic block model, guaranteeing a form of robustness and generalization. We further explore how semirandom models can lend insight into both the strengths and limitations of SDPs in this setting.
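
    The abstract does not spell out the program itself; as a flavor of what such relaxations look like, here is a minimal cvxpy sketch of the classic SDP relaxation for two balanced communities. The rounding step and all parameter choices are illustrative assumptions, not the paper's unbalanced multisection program.

```python
# Minimal sketch: classic SDP relaxation for two balanced communities in
# the stochastic block model (illustrative; NOT the paper's unbalanced
# multisection program).
import cvxpy as cp
import numpy as np

def sbm_sdp_bisection(A: np.ndarray) -> np.ndarray:
    """Solve max <A, X> s.t. X PSD, diag(X) = 1, sum(X) = 0,
    then round to +/-1 community labels."""
    n = A.shape[0]
    X = cp.Variable((n, n), symmetric=True)
    constraints = [X >> 0,            # positive semidefinite
                   cp.diag(X) == 1,   # unit diagonal
                   cp.sum(X) == 0]    # balanced-partition constraint
    cp.Problem(cp.Maximize(cp.trace(A @ X)), constraints).solve()
    # Round: signs of the top eigenvector of the optimal X give labels.
    _, eigvecs = np.linalg.eigh(X.value)
    return np.sign(eigvecs[:, -1])
```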

    Computational Hardness of Certifying Bounds on Constrained PCA Problems

    Given a random $n \times n$ symmetric matrix $W$ drawn from the Gaussian orthogonal ensemble (GOE), we consider the problem of certifying an upper bound on the maximum value of the quadratic form $x^\top W x$ over all vectors $x$ in a constraint set $S \subset \mathbb{R}^n$. For a certain class of normalized constraint sets $S$ we show that, conditional on certain complexity-theoretic assumptions, there is no polynomial-time algorithm certifying a better upper bound than the largest eigenvalue of $W$. A notable special case included in our results is the hypercube $S = \{\pm 1/\sqrt{n}\}^n$, which corresponds to the problem of certifying bounds on the Hamiltonian of the Sherrington-Kirkpatrick spin glass model from statistical physics. Our proof proceeds in two steps. First, we give a reduction from the detection problem in the negatively-spiked Wishart model to the above certification problem. We then give evidence that this Wishart detection problem is computationally hard below the classical spectral threshold, by showing that no low-degree polynomial can (in expectation) distinguish the spiked and unspiked models. This method for identifying computational thresholds was proposed in a sequence of recent works on the sum-of-squares hierarchy, and is believed to be correct for a large class of problems. Our proof can be seen as constructing a distribution over symmetric matrices that appears computationally indistinguishable from the GOE, yet is supported on matrices whose maximum quadratic form over $x \in S$ is much larger than that of a GOE matrix.
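
    A quick numerical illustration of the gap being certified (my own sketch, not from the paper): since every $x \in S = \{\pm 1/\sqrt{n}\}^n$ is a unit vector, $\lambda_{\max}(W)$ is always a valid certificate, yet it sits well above the best hypercube value one can actually find.

```python
# Numerical illustration (not from the paper): the spectral certificate
# lambda_max(W) upper-bounds x^T W x over the hypercube {+-1/sqrt(n)}^n,
# but sits well above the values actually attained there.
import numpy as np

rng = np.random.default_rng(0)
n = 500
G = rng.normal(size=(n, n))
W = (G + G.T) / np.sqrt(2 * n)        # GOE scaling: lambda_max -> 2

lam_max = np.linalg.eigvalsh(W)[-1]   # the certifiable upper bound

# Cheap lower bound: sign-round the top eigenvector onto the hypercube.
eigvecs = np.linalg.eigh(W)[1]
x = np.sign(eigvecs[:, -1]) / np.sqrt(n)
attained = x @ W @ x

print(f"hypercube value found: {attained:.3f}")   # roughly 1.3
print(f"spectral certificate:  {lam_max:.3f}")    # roughly 2.0
# The true hypercube maximum converges to ~1.526 (twice the Parisi
# constant), strictly below 2 -- yet the result above says no poly-time
# certificate is expected to improve on lambda_max.
```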

    Average-Case Complexity of Tensor Decomposition for Low-Degree Polynomials

    Suppose we are given an $n$-dimensional order-3 symmetric tensor $T \in (\mathbb{R}^n)^{\otimes 3}$ that is the sum of $r$ random rank-1 terms. The problem of recovering the rank-1 components is possible in principle when $r \lesssim n^2$ but polynomial-time algorithms are only known in the regime $r \ll n^{3/2}$. Similar "statistical-computational gaps" occur in many high-dimensional inference tasks, and in recent years there has been a flurry of work on explaining the apparent computational hardness in these problems by proving lower bounds against restricted (yet powerful) models of computation such as statistical queries (SQ), sum-of-squares (SoS), and low-degree polynomials (LDP). However, no such prior work exists for tensor decomposition, largely because its hardness does not appear to be explained by a "planted versus null" testing problem. We consider a model for random order-3 tensor decomposition where one component is slightly larger in norm than the rest (to break symmetry), and the components are drawn uniformly from the hypercube. We resolve the computational complexity in the LDP model: $O(\log n)$-degree polynomial functions of the tensor entries can accurately estimate the largest component when $r \ll n^{3/2}$ but fail to do so when $r \gg n^{3/2}$. This provides rigorous evidence suggesting that the best known algorithms for tensor decomposition cannot be improved, at least by known approaches. A natural extension of the result holds for tensors of any fixed order $k \ge 3$, in which case the LDP threshold is $r \sim n^{k/2}$.
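
    For concreteness, a small numpy sketch of the planted model described above; the sizes and the symmetry-breaking bump are illustrative assumptions.

```python
# Toy instance of the random tensor model above: r rank-1 hypercube
# components, the first slightly larger in norm to break symmetry.
# Sizes here are illustrative; the regimes of interest are r << n^{3/2}
# (easy for O(log n)-degree polynomials) vs r >> n^{3/2} (hard).
import numpy as np

rng = np.random.default_rng(1)
n, r, eps = 30, 100, 0.1                 # here r < n^{3/2} ~ 164 (easy side)
A = rng.choice([-1.0, 1.0], size=(r, n)) / np.sqrt(n)  # hypercube components
lam = np.ones(r)
lam[0] = 1.0 + eps                       # symmetry-breaking bump on component 0
T = np.einsum('i,ij,ik,il->jkl', lam, A, A, A)  # T = sum_i lam_i a_i^{(x)3}
```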

    Estimation under group actions: recovering orbits from invariants

    Motivated by geometric problems in signal processing, computer vision, and structural biology, we study a class of orbit recovery problems where we observe very noisy copies of an unknown signal, each acted upon by a random element of some group (such as Z/p or SO(3)). The goal is to recover the orbit of the signal under the group action in the high-noise regime. This generalizes problems of interest such as multi-reference alignment (MRA) and the reconstruction problem in cryo-electron microscopy (cryo-EM). We obtain matching lower and upper bounds on the sample complexity of these problems in high generality, showing that the statistical difficulty is intricately determined by the invariant theory of the underlying symmetry group. In particular, we determine that for cryo-EM with noise variance $\sigma^2$ and uniform viewing directions, the number of samples required scales as $\sigma^6$. We match this bound with a novel algorithm for ab initio reconstruction in cryo-EM, based on invariant features of degree at most 3. We further discuss how to recover multiple molecular structures from heterogeneous cryo-EM samples.
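
    In the simplest instance of this framework, MRA over $\mathbb{Z}/p$, the invariant features of degree at most 3 are the classical Fourier-domain moments; this standard special case is spelled out below for orientation rather than taken from the abstract.

```latex
% Shift-invariant moments of degree <= 3 for multi-reference alignment
% over Z/p, written in terms of the DFT \hat{x}:
\begin{align*}
  \mu       &= \hat{x}(0)               && \text{mean (degree 1)} \\
  P(k)      &= |\hat{x}(k)|^2           && \text{power spectrum (degree 2)} \\
  B(k,\ell) &= \hat{x}(k)\,\hat{x}(\ell)\,\overline{\hat{x}(k+\ell)}
                                        && \text{bispectrum (degree 3)}
\end{align*}
% Estimating a degree-d invariant from copies with noise variance
% \sigma^2 takes on the order of \sigma^{2d} samples at high noise, which
% is where the \sigma^6 scaling for degree-3 features comes from.
```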

    Quantum repeaters with individual rare-earth ions at telecommunication wavelengths

    We present a quantum repeater scheme that is based on individual erbium and europium ions. Erbium ions are attractive because they emit photons at telecommunication wavelength, while europium ions offer exceptional spin coherence for long-term storage. Entanglement between distant erbium ions is created by photon detection. The photon emission rate of each erbium ion is enhanced by a microcavity with high Purcell factor, as has recently been demonstrated. Entanglement is then transferred to nearby europium ions for storage. Gate operations between nearby ions are performed using dynamically controlled electric-dipole coupling. These gate operations allow entanglement swapping to be employed in order to extend the distance over which entanglement is distributed. The deterministic character of the gate operations allows improved entanglement distribution rates in comparison to atomic ensemble-based protocols. We also propose an approach that utilizes multiplexing in order to enhance the entanglement distribution rate.
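
    For reference, the enhancement referred to is the textbook Purcell factor for an emitter resonant with a cavity mode; this is the standard formula, not parameters taken from the scheme above.

```latex
% Textbook Purcell factor: the factor by which a resonant cavity mode
% enhances an emitter's spontaneous emission rate.
\[
  F_P \;=\; \frac{3}{4\pi^2}\,\Big(\frac{\lambda}{n}\Big)^{3}\,\frac{Q}{V},
\]
% where \lambda is the free-space wavelength (~1.5 \mu m for erbium's
% telecom-band transition), n the refractive index, Q the cavity quality
% factor, and V the mode volume. A high-Q, small-V microcavity is what
% makes the single-erbium-ion emission rate usable in the scheme above.
```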

    Computational Barriers to Estimation from Low-Degree Polynomials

    One fundamental goal of high-dimensional statistics is to detect or recover structure from noisy data. In many cases, the data can be faithfully modeled by a planted structure (such as a low-rank matrix) perturbed by random noise. But even for these simple models, the computational complexity of estimation is sometimes poorly understood. A growing body of work studies low-degree polynomials as a proxy for computational complexity: it has been demonstrated in various settings that low-degree polynomials of the data can match the statistical performance of the best known polynomial-time algorithms for detection. While prior work has studied the power of low-degree polynomials for the task of detecting the presence of hidden structures, it has failed to address the estimation problem in settings where detection is qualitatively easier than estimation. In this work, we extend the method of low-degree polynomials to address problems of estimation and recovery. For a large class of "signal plus noise" problems, we give a user-friendly lower bound for the best possible mean squared error achievable by any degree-$D$ polynomial. To our knowledge, this is the first instance in which the low-degree polynomial method can establish low-degree hardness of recovery problems where the associated detection problem is easy. As applications, we give a tight characterization of the low-degree minimum mean squared error for the planted submatrix and planted dense subgraph problems, resolving (in the low-degree framework) open problems about the computational complexity of recovery in both cases.
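
    Concretely, the quantity being bounded is the low-degree minimum mean squared error; the standard definition is spelled out below for orientation.

```latex
% Degree-D minimum mean squared error for estimating a scalar quantity
% x = x(\theta) of the planted signal from data Y:
\[
  \mathrm{MMSE}_{\le D} \;=\;
    \inf_{\deg f \le D}\; \mathbb{E}\big[(x - f(Y))^{2}\big],
\]
% where the infimum is over polynomials f of degree at most D in the
% entries of Y. Showing that MMSE_{<= D} stays near the trivial error
% Var(x) is the form of low-degree hardness of recovery established above.
```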